
    Existence and approximation of fixed points of right Bregman nonexpansive operators

    We study the existence and approximation of fixed points of right Bregman nonexpansive operators in reflexive Banach spaces. In particular, we present necessary and sufficient conditions for the existence of fixed points and an implicit scheme for approximating them.
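
    The abstract's implicit scheme is not spelled out here; as a hedged illustration only, the sketch below runs the familiar explicit Krasnoselskii-Mann iteration for a nonexpansive map in Euclidean space. The paper's setting replaces this norm geometry with Bregman distances, and the map T below is an assumed stand-in, not taken from the paper.

```python
import numpy as np

# Minimal sketch, not the paper's implicit scheme: a Krasnoselskii-Mann
# iteration x_{n+1} = (1 - t) x_n + t T(x_n) for approximating a fixed
# point of a nonexpansive map T in Euclidean space.

def km_iterate(T, x0, steps=200, t=0.5):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = (1 - t) * x + t * T(x)
    return x

# Illustrative T: the (nonexpansive) projection onto the closed unit ball.
T = lambda x: x / max(1.0, np.linalg.norm(x))
print(km_iterate(T, np.array([3.0, 4.0])))  # converges to (0.6, 0.8), a fixed point of T
```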

    Convergence and Perturbation Resilience of Dynamic String-Averaging Projection Methods

    We consider the convex feasibility problem (CFP) in Hilbert space and concentrate on the study of string-averaging projection (SAP) methods for the CFP, analyzing their convergence and their perturbation resilience. In the past, SAP methods were formulated with a single predetermined set of strings and a single predetermined set of weights. Here we extend the scope of the family of SAP methods to allow iteration-index-dependent variable strings and weights, and we term such methods dynamic string-averaging projection (DSAP) methods. The bounded perturbation resilience of DSAP methods is relevant and important for their possible use in the framework of the recently developed superiorization heuristic methodology for constrained minimization problems. (Accepted for publication in Computational Optimization and Applications.)
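
    A minimal sketch of one DSAP sweep may help fix ideas. The halfspace constraints, strings, and weights below are assumed illustrative data; in a genuinely dynamic method the strings and weights would be re-chosen at each iteration index k.

```python
import numpy as np

# Sketch of one dynamic string-averaging projection (DSAP) step, under
# assumed data: each constraint set is a halfspace {x : <a_i, x> <= b_i}
# with a closed-form projection.

def proj_halfspace(x, a, b):
    """Project x onto {y : <a, y> <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def dsap_step(x, halfspaces, strings, weights):
    """Apply each string's projections sequentially from x, then take a
    convex combination of the string endpoints with the given weights."""
    endpoints = []
    for s in strings:
        y = x
        for i in s:
            y = proj_halfspace(y, *halfspaces[i])
        endpoints.append(y)
    return sum(w * y for w, y in zip(weights, endpoints))

# Example: feasibility for two halfspaces in the plane.
H = [(np.array([1.0, 0.0]), 0.0), (np.array([0.0, 1.0]), 0.0)]
x = np.array([2.0, 3.0])
for k in range(50):
    # Strings/weights could depend on k; fixed here for brevity.
    x = dsap_step(x, H, strings=[(0, 1), (1, 0)], weights=[0.5, 0.5])
print(x)  # approaches the nonpositive quadrant
```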

    Global convergence of a non-convex Douglas-Rachford iteration

    We establish a region of convergence for the prototypical non-convex Douglas-Rachford iteration, which finds a point in the intersection of a line and a circle. Previous work on the non-convex iteration [2] was only able to establish local convergence and was ineffective in that no explicit region of convergence could be given.
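
    For concreteness, here is a hedged numerical sketch of the iteration in question: reflect in the (non-convex) unit circle, then in a line, then average with the identity. The particular line y = x and the starting point are illustrative choices, not taken from the paper.

```python
import numpy as np

# Non-convex Douglas-Rachford sketch: x_{n+1} = T(x_n) with
# T = (I + R_L R_C) / 2, where R_C and R_L are reflections in the unit
# circle and in a line through the origin (R = 2P - I for projection P).

def P_circle(x):           # nearest point on the unit circle (non-convex set)
    return x / np.linalg.norm(x)

def make_P_line(d):        # projection onto the line span{d}, with ||d|| = 1
    return lambda x: (d @ x) * d

def dr_step(x, P_A, P_B):
    R_B = 2 * P_B(x) - x               # reflect in B (the circle) first
    R_A = 2 * P_A(R_B) - R_B           # then in A (the line)
    return 0.5 * (x + R_A)             # average with the identity

d = np.array([1.0, 1.0]) / np.sqrt(2.0)   # illustrative line y = x
P_line = make_P_line(d)
x = np.array([1.0, 0.3])
for _ in range(100):
    x = dr_step(x, P_line, P_circle)
print(P_line(x))  # approaches the intersection point (1/sqrt(2), 1/sqrt(2))
```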

    New results on q-positivity

    In this paper we discuss symmetrically self-dual spaces, which are simply real vector spaces equipped with a symmetric bilinear form. Certain subsets of the space are called q-positive, where q is the quadratic form induced by the original bilinear form. The notion of q-positivity generalizes the classical notion of monotonicity of a subset of the product of a Banach space and its dual, and maximal q-positivity then generalizes maximal monotonicity. We discuss concepts generalizing the representations of monotone sets by convex functions, as well as the number of maximally q-positive extensions of a q-positive set. We also discuss symmetrically self-dual Banach spaces, in which we add a Banach space structure, giving new characterizations of maximal q-positivity. The paper finishes with two new examples. (18 pages.)
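
    The classical special case the abstract alludes to can be written out explicitly. The normalization of the bilinear form below follows one common convention and is an assumption, not a quotation from the paper.

```latex
% Monotonicity as q-positivity: take the symmetrically self-dual space
% B = X \times X^* with the symmetric bilinear form (one common normalization)
%   \lfloor (x,x^*), (y,y^*) \rfloor
%     = \tfrac{1}{2}\bigl(\langle x, y^* \rangle + \langle y, x^* \rangle\bigr),
% whose induced quadratic form is q(x,x^*) = \langle x, x^* \rangle. Then
\[
  A \subseteq X \times X^* \ \text{is $q$-positive}
  \iff
  q\bigl((x,x^*) - (y,y^*)\bigr) = \langle x - y,\ x^* - y^* \rangle \;\ge\; 0
  \quad \text{for all } (x,x^*),\,(y,y^*) \in A,
\]
% which is precisely monotonicity of A; maximal q-positivity reduces to
% maximal monotonicity in the same way.
```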

    Incremental proximal methods for large scale convex optimization

    We consider the minimization of a sum $\sum_{i=1}^{m} f_i(x)$ consisting of a large number of convex component functions $f_i$. For this problem, incremental methods consisting of gradient or subgradient iterations applied to single components have proved very effective. We propose new incremental methods consisting of proximal iterations applied to single components, as well as combinations of gradient, subgradient, and proximal iterations. We provide a convergence and rate-of-convergence analysis of a variety of such methods, including some that involve randomization in the selection of components. We also discuss applications in a few contexts, including signal processing and inference/machine learning. (Laboratory for Information and Decision Systems Report LIDS-P-2847; supported by the United States Air Force Office of Scientific Research, grant FA9550-10-1-0412.)
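
    As a minimal sketch of the incremental proximal idea (not the paper's full algorithmic framework), the code below applies, at each step, the proximal map of a single randomly selected quadratic component, for which the prox is available in closed form; all data are synthetic.

```python
import numpy as np

# Incremental proximal sketch: x_{k+1} = prox_{alpha f_i}(x_k) for one
# component f_i(x) = 0.5 * (a_i @ x - b_i)**2 chosen at random per step,
# mirroring the randomized variants the abstract mentions.

rng = np.random.default_rng(0)
m, n = 100, 5
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)          # consistent linear system

def prox_quadratic(x, a, beta, alpha):
    """Closed-form prox of alpha * 0.5 * (a @ x - beta)^2 at x."""
    return x - (alpha * (a @ x - beta) / (1.0 + alpha * (a @ a))) * a

x = np.zeros(n)
alpha = 1.0
for k in range(5000):
    i = rng.integers(m)                 # randomized component selection
    x = prox_quadratic(x, A[i], b[i], alpha)
print(np.linalg.norm(A @ x - b))        # residual shrinks toward 0
```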

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and regularization theory form a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient for many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter delivers a review of recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity/low complexity. These priors encompass, as popular examples, sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (as a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop toward understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of $\ell^2$-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
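
    Item (iii) concerns forward-backward splitting; a hedged sketch for the sparsity prior (l1-regularized least squares, with soft-thresholding as the prox) is given below on synthetic data.

```python
import numpy as np

# Forward-backward (proximal gradient) splitting for the sparsity prior:
#   min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
# The prox of lam * ||.||_1 is soft-thresholding. Data are synthetic.

rng = np.random.default_rng(1)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                           # noiseless sparse model

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = gradient Lipschitz constant
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)             # forward (gradient) step on the smooth part
    x = soft_threshold(x - step * grad, step * lam)  # backward (prox) step
print(np.linalg.norm(x - x_true))        # small error; lam introduces a slight bias
```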